

Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory

Neural Information Processing Systems

Temporal-difference and Q-learning play a key role in deep reinforcement learning, where they are empowered by expressive nonlinear function approximators such as neural networks. At the core of their empirical successes is the learned feature representation, which embeds rich observations, e.g., images and texts, into the latent space that encodes semantic structures. Meanwhile, the evolution of such a feature representation is crucial to the convergence of temporal-difference and Q-learning. In particular, temporal-difference learning converges when the function approximator is linear in a feature representation, which is fixed throughout learning, and possibly diverges otherwise. We aim to answer the following question: When the function approximator is a neural network, how does the associated feature representation evolve?
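
To make the convergence setting described in the abstract concrete, below is a minimal TD(0) sketch in which the value function is linear in a feature representation that stays fixed throughout learning, the regime in which temporal-difference learning is known to converge. The toy Markov chain, feature map, discount factor, and step size are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

n_states, d = 5, 3          # number of states, feature dimension
gamma, lr = 0.9, 0.05       # discount factor, learning rate

# Fixed feature map phi: state -> R^d, frozen throughout learning.
phi = rng.standard_normal((n_states, d))

# Random row-stochastic transition matrix and reward vector for a toy chain.
P = rng.random((n_states, n_states))
P /= P.sum(axis=1, keepdims=True)
r = rng.standard_normal(n_states)

theta = np.zeros(d)         # linear value function V(s) = phi[s] @ theta

s = 0
for _ in range(50_000):
    s_next = rng.choice(n_states, p=P[s])
    # TD(0) semi-gradient update on the linear approximator.
    td_error = r[s] + gamma * phi[s_next] @ theta - phi[s] @ theta
    theta += lr * td_error * phi[s]
    s = s_next

print("learned weights:", theta)

Because the features phi are never updated, the iterates track a linear fixed-point equation; the question the paper raises is what happens when the features themselves evolve, as they do in a neural network.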


Review for NeurIPS paper: Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory

Neural Information Processing Systems

Additional Feedback: In the definition of the continuity equation, what does "div" stand for, and how is it defined? The definition of Q-hat in (3.1) implies that the activation function sigma is applied only in the first layer of the network. How much harder would the problem be to analyze if the second layer also applied an activation function? I guess the dimensions D and d should be closely related, e.g.
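
For readers with the same question, the following are the standard definitions the reviewer is most likely asking about; the exact notation of the paper's Eq. (3.1) may differ, and the symbols alpha, m, b_i, w_i below are assumed for illustration rather than copied from the paper.

% "div" is the divergence operator on a vector field over parameter space:
\[
  \operatorname{div}(v)(\theta) \;=\; \sum_{i=1}^{D} \frac{\partial v_i(\theta)}{\partial \theta_i},
  \qquad v : \mathbb{R}^{D} \to \mathbb{R}^{D}.
\]
% The continuity equation then transports the parameter distribution \rho_t
% along a velocity field v(\cdot; \rho_t):
\[
  \partial_t \rho_t \;=\; -\operatorname{div}\bigl(\rho_t \, v(\cdot;\rho_t)\bigr).
\]
% A two-layer network with the activation only in the first layer, the
% structure the reviewer describes, has the generic mean-field form
\[
  \widehat{Q}(x;\theta) \;=\; \frac{\alpha}{m} \sum_{i=1}^{m} b_i \,\sigma\!\bigl(w_i^{\top} x\bigr),
  \qquad \theta_i = (b_i, w_i),
\]
% i.e., the second layer is linear, which is what lets the network be viewed
% as an average over i.i.d. particles theta_i in the mean-field analysis.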


Review for NeurIPS paper: Can Temporal-Difference and Q-Learning Learn Representation? A Mean-Field Theory

Neural Information Processing Systems

The paper presents new results on the convergence of TD and Q-learning when the action-value function is represented by an overparameterized neural network. The reviewers regard the theoretical contribution as solid. The weaknesses they describe are not major and can be addressed in a minor revision; I therefore recommend accepting this paper.
